C++ const and Immutability: An Empirical Study of Writes-Through-const (Artifact)
This artifact is based on ConstSanitizer, a dynamic program analysis tool that detects writes through const qualifiers, i.e., violations of deep immutability. Our tool instruments any code compiled by Clang with the -fsanitize=const flag. Our implementation includes both instrumentation of LLVM code and a runtime library to support our analysis. The provided package includes our tool and all experiments used in our companion paper. Instructions are also provided.
C++ const and Immutability: An Empirical Study of Writes-Through-const
The ability to specify immutability in a programming language is a
powerful tool for developers, enabling them to better understand and
more safely transform their code without fearing unintended changes to program state. The C++ programming language allows developers to
specify a form of immutability using the const keyword. In this work, we characterize the meaning of the C++ const qualifier and present the ConstSanitizer tool, which dynamically verifies a stricter form of immutability than that defined in C++: it identifies const uses that are not consistent with transitive immutability, that write to mutable fields, or that write to formerly-const objects whose const-ness has been cast away.
We evaluate a set of 7 C++ benchmark programs to find writes-through-const, establish root causes for how they fail to respect our stricter definition of immutability, and assign attributes
to each write (namely: synchronized, not visible, buffer/cache,
delayed initialization, and incorrect). ConstSanitizer finds 17
archetypes for writes in these programs that do not respect our
version of immutability. Over half of these writes seem unnecessary to us.
Our classification and observations of behaviour in practice
contribute to the understanding of a widely-used C++ language feature.
Approach to the Evaluation and Management of Wide Complex Tachycardias
Wide complex tachycardia (WCT) refers to a cardiac rhythm of more than 100 beats per minute with a QRS duration of 120 ms or more on the surface electrocardiogram (ECG). It often presents a diagnostic dilemma for the physician, particularly in determining its site of origin, which can be ventricular or supraventricular. In one series, only 32% of clinicians correctly diagnosed ventricular tachycardia (VT) in patients who presented with WCT.1 Prompt diagnosis of the etiology of WCT is, however, essential since immediate care is frequently required. Diagnostic and therapeutic errors can produce a poor outcome, especially when ventricular tachycardia is not recognized.2,3 WCT that is grossly irregular typically represents atrial fibrillation with aberrant conduction or preexcitation. If its rate exceeds 200 beats per minute, the likelihood of atrial fibrillation with conduction over an accessory pathway should be entertained. In this article, we discuss the approach to the evaluation and management of regular WCT.
Do Android Taint Analysis Tools Keep Their Promises?
In recent years, researchers have developed a number of tools to conduct
taint analysis of Android applications. While all the respective papers aim at
providing a thorough empirical evaluation, comparability is hindered by varying
or unclear evaluation targets. Sometimes, the apps used for evaluation are not
precisely described. In other cases, authors use an established benchmark but
cover it only partially. In yet other cases, the evaluations differ in terms of
the data leaks searched for, or lack a ground truth to compare against. All
those limitations make it impossible to truly compare the tools based on those
published evaluations.
We thus present ReproDroid, a framework allowing the accurate comparison of
Android taint analysis tools. ReproDroid supports researchers in inferring the
ground truth for data leaks in apps, in automatically applying tools to
benchmarks, and in evaluating the obtained results. We use ReproDroid to
comparatively evaluate on equal grounds the six prominent taint analysis tools
Amandroid, DIALDroid, DidFail, DroidSafe, FlowDroid and IccTA. The results are
largely positive although four tools violate some promises concerning features
and accuracy. Finally, we contribute to the area of unbiased benchmarking with
a new and improved version of the open test suite DroidBench.
BCFA: Bespoke Control Flow Analysis for CFA at Scale
Many data-driven software engineering tasks such as discovering programming
patterns, mining API specifications, etc., perform source code analysis over
control flow graphs (CFGs) at scale. Analyzing millions of CFGs can be
expensive and performance of the analysis heavily depends on the underlying CFG
traversal strategy. State-of-the-art analysis frameworks use a fixed traversal
strategy. We argue that a single traversal strategy does not fit all kinds of
analyses and CFGs and propose bespoke control flow analysis (BCFA). Given a
control flow analysis (CFA) and a large number of CFGs, BCFA selects the most
efficient traversal strategy for each CFG. BCFA extracts a set of properties of
the CFA by analyzing the code of the CFA and combines it with properties of the
CFG, such as branching factor and cyclicity, for selecting the optimal
traversal strategy. We have implemented BCFA in Boa, and evaluated BCFA using a
set of representative static analyses that mainly involve traversing CFGs and
two large datasets containing 287 thousand and 162 million CFGs. Our results
show that BCFA can speed up the large-scale analyses by 1%-28%. Further, BCFA
has low overhead (less than 0.2%) and a low misprediction rate (less than 0.01%).
Comment: 12 pages
Exploring the Verifiability of Code Generated by GitHub Copilot
GitHub's Copilot generates code quickly. We investigate whether it generates
good code. Our approach is to identify a set of problems, ask Copilot to
generate solutions, and attempt to formally verify these solutions with Dafny.
Our formal verification is with respect to hand-crafted specifications. We have
carried out this process on 6 problems and succeeded in formally verifying 4 of
the created solutions. We found evidence which corroborates the current
consensus in the literature: Copilot is a powerful tool; however, it should not
be "flying the plane" by itself.
Comment: HATRA workshop at SPLASH 202
Putting the Semantics into Semantic Versioning
The long-standing aspiration for software reuse has made astonishing strides
in the past few years. Many modern software development ecosystems now come
with rich sets of publicly-available components contributed by the community.
Downstream developers can leverage these upstream components, boosting their
productivity.
However, components evolve at their own pace. This imposes obligations on and
yields benefits for downstream developers, especially since changes can be
breaking, requiring additional downstream work to adapt. Upgrading too late
leaves downstream vulnerable to security issues and missing out on useful
improvements; upgrading too early results in excess work. Semantic versioning
has been proposed as an elegant mechanism to communicate levels of
compatibility, enabling downstream developers to automate dependency upgrades.
While it is questionable whether a version number can adequately characterize
version compatibility in general, we argue that developers would greatly
benefit from tools such as semantic version calculators to help them upgrade
safely. The time is now for the research community to develop such tools: large
component ecosystems exist and are accessible, component interactions have
become observable through automated builds, and recent advances in program
analysis make the development of relevant tools feasible. In particular,
contracts (both traditional and lightweight) are a promising input to semantic
versioning calculators, which can suggest whether an upgrade is likely to be
safe.
Comment: to be published as Onward! Essays 202